Token Cap Analysis
This post analyzes crypto tokens by bucketing them into market cap categories and comparing those categories, in absolute and relative terms, along three dimensions:
[1] - How they are valued by the market
[2] - How actively they are traded
[3] - How concentrated their holdings are (how they are hodl'ed)
Data
We leverage data sources from CoinMarketCap and Ethplorer for this analysis.
import pandas as pd
import numpy as np
import requests
import os
import altair as alt
import warnings
from pandas import json_normalize
import plotly.express as px
warnings.filterwarnings("ignore")
pd.set_option("display.float_format", lambda x: "%.2f" % x)
pd.set_option("display.max_columns", 50)
alt.renderers.enable("default")
# alt.renderers.enable('altair_viewer')
We use the CoinMarketCap API to pull the top 1000 tokens, as ranked by CoinMarketCap.
We will use the listings/latest endpoint under the Cryptocurrency category of APIs.
COINMARKETCAP_API_KEY = os.environ.get("COINMARKETCAP_API_KEY")
URL = "https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest"
PARAMS = {
"start": 1,
"limit": 1000,
"aux": "num_market_pairs,cmc_rank,date_added,tags,platform,max_supply,circulating_supply,total_supply,market_cap_by_total_supply,volume_24h_reported,volume_7d,volume_7d_reported,volume_30d,volume_30d_reported,is_market_cap_included_in_calc",
"CMC_PRO_API_KEY": COINMARKETCAP_API_KEY,
}
response = requests.get(url=URL, params=PARAMS)
df = json_normalize(response.json(), "data")
df.shape
(1000, 42)
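The 42 columns come from `json_normalize` flattening the nested JSON under the `data` key: nested objects become dot-separated column names (which is why you see columns like `quote.USD.price` below). A minimal sketch with a made-up payload mimicking the response shape:

```python
from pandas import json_normalize

# Made-up payload mimicking the shape of the listings/latest response
payload = {
    "status": {"error_code": 0},
    "data": [
        {"id": 1, "symbol": "BTC", "quote": {"USD": {"price": 19553.36}}},
    ],
}

# record_path="data" extracts the record list; nested dicts inside each
# record flatten into dotted column names
flat = json_normalize(payload, "data")
print(list(flat.columns))  # ['id', 'symbol', 'quote.USD.price']
```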
This endpoint broadly gives 4 types of data:
[1] Token info - symbol, token address, platform, etc.
[2] Token supply and market cap
[3] Volume traded
[4] Trend of volume traded over 24-hour, 7-day, 30-day, 60-day, and 90-day windows
df.head(5)
| id | name | symbol | slug | num_market_pairs | date_added | tags | max_supply | circulating_supply | total_supply | is_market_cap_included_in_calc | platform | cmc_rank | self_reported_circulating_supply | self_reported_market_cap | tvl_ratio | last_updated | quote.USD.price | quote.USD.volume_24h | quote.USD.volume_24h_reported | quote.USD.volume_7d | quote.USD.volume_7d_reported | quote.USD.volume_30d | quote.USD.volume_30d_reported | quote.USD.volume_change_24h | quote.USD.percent_change_1h | quote.USD.percent_change_24h | quote.USD.percent_change_7d | quote.USD.percent_change_30d | quote.USD.percent_change_60d | quote.USD.percent_change_90d | quote.USD.market_cap | quote.USD.market_cap_dominance | quote.USD.fully_diluted_market_cap | quote.USD.tvl | quote.USD.market_cap_by_total_supply | quote.USD.last_updated | platform.id | platform.name | platform.symbol | platform.slug | platform.token_address | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | Bitcoin | BTC | bitcoin | 9777 | 2013-04-28T00:00:00.000Z | [mineable, pow, sha-256, store-of-value, state... | 21000000.00 | 19181737.00 | 19181737.00 | 1 | NaN | 1 | NaN | NaN | NaN | 2022-10-17T14:28:00.000Z | 19553.36 | 26105392035.29 | 91167084776.74 | 270502761031.41 | 917874592028.58 | 737514966525.49 | 87702220155056.41 | 80.48 | -0.04 | 2.22 | 1.07 | -1.96 | -16.66 | -12.06 | 375067443519.15 | 40.06 | 410620597806.24 | NaN | 375067443519.15 | 2022-10-17T14:28:00.000Z | NaN | NaN | NaN | NaN | NaN |
| 1 | 1027 | Ethereum | ETH | ethereum | 6139 | 2015-08-07T00:00:00.000Z | [pos, smart-contracts, ethereum-ecosystem, coi... | NaN | 122373863.50 | 122373863.50 | 1 | NaN | 2 | NaN | NaN | NaN | 2022-10-17T14:27:00.000Z | 1330.85 | 9870583976.87 | 45043638368.62 | 60366886659.34 | 269290512209.18 | 296193947881.93 | 1351483181263.92 | 65.22 | 0.67 | 3.72 | 1.37 | -7.02 | -28.50 | -13.50 | 162860855233.54 | 17.39 | 162860855233.54 | NaN | 162860855233.54 | 2022-10-17T14:27:00.000Z | NaN | NaN | NaN | NaN | NaN |
| 2 | 825 | Tether | USDT | tether | 40877 | 2015-02-25T00:00:00.000Z | [payments, stablecoin, asset-backed-stablecoin... | NaN | 68432559804.79 | 70146125804.24 | 1 | NaN | 3 | NaN | NaN | NaN | 2022-10-17T14:27:00.000Z | 1.00 | 36341478368.36 | 146100656120.30 | 226688836073.61 | 911761290272.67 | 1052657452335.71 | 4230371498325.82 | 52.99 | 0.00 | 0.00 | 0.00 | 0.01 | 0.00 | 0.01 | 68438363990.14 | 7.30 | 70152075327.64 | NaN | 70152075327.64 | 2022-10-17T14:27:00.000Z | 1027.00 | Ethereum | ETH | ethereum | 0xdac17f958d2ee523a2206206994597c13d831ec7 |
| 3 | 3408 | USD Coin | USDC | usd-coin | 6524 | 2018-10-08T00:00:00.000Z | [medium-of-exchange, stablecoin, asset-backed-... | NaN | 44995513224.72 | 44995513224.72 | 1 | NaN | 4 | NaN | NaN | NaN | 2022-10-17T14:27:00.000Z | 1.00 | 3160569147.74 | 6592379574.54 | 18809467321.23 | 40361910871.20 | 103640711276.33 | 216179896466.05 | 37.14 | 0.00 | 0.01 | -0.00 | 0.00 | 0.01 | 0.04 | 44998922083.68 | 4.81 | 44998922083.68 | NaN | 44998922083.68 | 2022-10-17T14:27:00.000Z | 1027.00 | Ethereum | ETH | ethereum | 0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48 |
| 4 | 1839 | BNB | BNB | bnb | 1126 | 2017-07-25T00:00:00.000Z | [marketplace, centralized-exchange, payments, ... | 200000000.00 | 161337261.09 | 161337261.09 | 1 | NaN | 5 | NaN | NaN | NaN | 2022-10-17T14:27:00.000Z | 274.17 | 627480239.47 | 1544132230.07 | 4176005332.96 | 9851631907.22 | 35906745156.56 | 101276799338.04 | 24.98 | -0.01 | 1.58 | -0.34 | -1.10 | -10.85 | 4.35 | 44233111221.83 | 4.72 | 54833100454.28 | NaN | 44233111221.83 | 2022-10-17T14:27:00.000Z | NaN | NaN | NaN | NaN | NaN |
df['platform.name'] = np.where(df['platform.name'].isnull(), "Not Available", df['platform.name'])
df['market_cap_BB'] = df['quote.USD.market_cap']/1000000000
Token Caps
In this section we split all tokens into one of the following buckets:
- Mega Cap : >200B USD market cap
- Large Cap : 10B USD to 200B USD
- Mid Cap : 2B USD to 10B USD
- Small Cap : 300M USD to 2B USD
- Micro Cap : 50M USD to 300M USD
- Nano Cap : <50M USD
These buckets follow the standard market capitalization thresholds. Read here for more on this.
df['token_cap'] = 'NA'
df['token_cap'] = np.where(df['market_cap_BB'] > 200, 'Mega Cap', df['token_cap'])
df['token_cap'] = np.where((df['market_cap_BB'] > 10) & (df['market_cap_BB'] <= 200), 'Large Cap', df['token_cap'])
df['token_cap'] = np.where((df['market_cap_BB'] > 2) & (df['market_cap_BB'] <= 10), 'Mid Cap', df['token_cap'])
df['token_cap'] = np.where((df['market_cap_BB'] > 0.3) & (df['market_cap_BB'] <= 2), 'Small Cap', df['token_cap'])
df['token_cap'] = np.where((df['market_cap_BB'] > 0.05) & (df['market_cap_BB'] <= 0.3), 'Micro Cap', df['token_cap'])
df['token_cap'] = np.where((df['market_cap_BB'] <= 0.05), 'Nano Cap', df['token_cap'])
df['token_cap'] = pd.Categorical(df['token_cap'], ["Mega Cap", "Large Cap", "Mid Cap", "Small Cap", "Micro Cap", "Nano Cap"])
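The chained `np.where` calls above can be written more compactly with `pd.cut`. A sketch on toy values (the bin edges mirror the thresholds above, and `pd.cut`'s right-inclusive bins match the `<=` comparisons):

```python
import numpy as np
import pandas as pd

# Bin edges in billions of USD, mirroring the thresholds above;
# pd.cut bins are right-inclusive, matching the <= comparisons
bins = [-np.inf, 0.05, 0.3, 2, 10, 200, np.inf]
labels = ["Nano Cap", "Micro Cap", "Small Cap", "Mid Cap", "Large Cap", "Mega Cap"]

caps = pd.Series([250.0, 50.0, 5.0, 1.0, 0.1, 0.01], name="market_cap_BB")
token_cap = pd.cut(caps, bins=bins, labels=labels)
print(list(token_cap))
# ['Mega Cap', 'Large Cap', 'Mid Cap', 'Small Cap', 'Micro Cap', 'Nano Cap']
```

Note that `pd.cut` returns an ordered categorical running Nano Cap to Mega Cap; use `.cat.reorder_categories` if you want the Mega-first ordering used above.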
In this section, we visualize the number of tokens in each market cap category we created. We also take a cap category and individual token view in the next plot
# Tokens in each market cap category created
bars = (
alt.Chart(df)
.mark_bar(cornerRadiusTopLeft=3, cornerRadiusTopRight=3)
.encode(
x=alt.X("token_cap:O", sort="y"),
y="count():Q",
color="token_cap:N",
tooltip=["token_cap", "count(token_cap)"],
)
)
text = bars.mark_text(
align="left",
baseline="middle",
dx=-5,
dy=-5,  # nudge the label so it doesn't sit on top of the bar
).encode(text="count(token_cap):Q")
(bars + text).properties(height=400, width=850)
96% of the top 1000 tokens are Small Cap or smaller
3% of the top 1000 tokens are classified as Mid Cap
Only the top ~1% qualify as Large Cap; BTC is the only token eligible for Mega Cap
Tokens by market cap - Granular View
fig = px.treemap(
df,
path=[px.Constant("all"), "token_cap", "symbol"],
values="market_cap_BB",
color="token_cap",
color_discrete_map={
"Mega Cap": "lightgrey",
"Large Cap": "gold",
"Mid Cap": "cyan",
"Small Cap": "Violet",
"Micro Cap": "red",
"Nano Cap": "green",
},
)
fig.update_layout(margin=dict(t=50, l=25, r=25, b=25))
fig.show()
The Large Cap segment is dominated by ETH and USDT.
Taken together, its 8 tokens command a market cap of 383B USD (BTC alone has a market cap of 366B USD)
Mid Cap is dominated by DOGE and MATIC.
Its 27 tokens have a combined market cap of 113B USD (~1/3rd of the Large Cap segment)
Small Cap has a combined market cap of ~60B USD,
followed by Micro Cap and Nano Cap with combined market caps of 27B USD and 11B USD respectively
Note that this is an interactive chart - click on a segment name or a token symbol for a more granular view
In this section, we derive the VolTraded/MarketCap Ratio
This ratio (Vol/MarketCap) signifies the volume traded in a given period as a proportion of market capitalization
We visualize the median ratio for each token cap category
df['24hr_vol_traded_by_market_cap'] = df['quote.USD.volume_24h_reported'] / df['quote.USD.market_cap']
df['7day_vol_traded_by_market_cap'] = df['quote.USD.volume_7d_reported'] / df['quote.USD.market_cap']
df['30day_vol_traded_by_market_cap'] = df['quote.USD.volume_30d_reported'] / df['quote.USD.market_cap']
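As a quick sanity check of the ratio, we can plug in the BTC figures from the `df.head()` output above (24h reported volume of ~91.17B USD against a ~375.07B USD market cap):

```python
# BTC figures taken from the df.head() output above
volume_24h_reported = 91167084776.74
market_cap = 375067443519.15

# roughly 24% of BTC's market cap was traded in the last 24 hours
ratio = volume_24h_reported / market_cap
print(f"{ratio:.1%}")
```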
chart_list = []
for metric in ["24hr_vol_traded_by_market_cap","7day_vol_traded_by_market_cap","30day_vol_traded_by_market_cap"]:
source = pd.DataFrame(df.groupby("token_cap").agg({metric: "median"})).reset_index()
bars = (
alt.Chart(
source,
title=f"{metric.split('_')[0]} Median - VolumeTraded/MarketCap Ratio ",
)
.mark_bar(cornerRadiusTopLeft=3, cornerRadiusTopRight=3)
.encode(
x=alt.X(
"token_cap:O",
sort=["Mega Cap","Large Cap","Mid Cap","Small Cap","Micro Cap","Nano Cap"],
),
y=metric,
color="token_cap:N",
tooltip=["token_cap", metric],
)
)
text = bars.mark_text(
align="left",
baseline="middle",
dx=-5,
dy=-5,  # nudge the label so it doesn't sit on top of the bar
).encode(text=alt.Text(metric, format=".0%"))
chart = (bars + text).properties(height=400, width=850)
chart_list.append(chart)
alt.vconcat(*chart_list).configure(numberFormat="%")
One chart each for the median VolTraded/MarketCap ratio over the last 24 hours, 7 days, and 30 days
For every time period, the ratio decreases as we move from Mega Cap to Nano Cap, indicating that tokens in smaller cap segments have lower liquidity than those in larger ones
Although the trend decreases from Mega Cap to Nano Cap, do all tokens conform to this pattern, or are there outliers?
The next chart helps identify that
chart_list = []
for metric in ["24hr_vol_traded_by_market_cap","7day_vol_traded_by_market_cap","30day_vol_traded_by_market_cap"]:
chart = (
alt.Chart(df, title=f"{metric.split('_')[0]} VolumeTraded/MarketCap Ratio ")
.mark_boxplot(ticks=True, size=70)
.encode(
x=alt.X(
"token_cap:O",
title=None,
axis=alt.Axis(labels=True, ticks=False),
scale=alt.Scale(padding=1),
sort=["Mega Cap","Large Cap","Mid Cap","Small Cap","Micro Cap","Nano Cap"],
),
y=alt.Y(metric),
tooltip=["token_cap", metric],
color="token_cap",
)
.properties(width=800)
)
chart_list.append(chart)
alt.vconcat(*chart_list).configure_facet(spacing=0).configure_view(stroke=None)
Although the median VolTraded/MarketCap ratio decreases as we move from Mega Cap to Nano Cap,
a number of Micro and Nano Cap tokens have higher ratios, indicating more trading activity than their peers in the same market cap category
Similar to the Know Your Token blog post (Top Token Holders), where we analyzed the share of a token held by its top 100 wallets,
we do the same for each of the 1000 tokens and look at the share held by each token's top 100 wallets
Note: this info is available only for tokens on the Ethereum chain, so the analysis below is restricted to those. The snippet below uses the Ethplorer Top Token Holders endpoint to get the top 100 wallets' share of a given token
ETHPLORER_API_KEY = os.environ.get("ETHPLORER_API_KEY")
PARAMS = {"limit": 100, "apiKey": ETHPLORER_API_KEY}
total_rows = df.shape[0]
success_count = 0
for index, row in df.iterrows():
token_address = row["platform.token_address"]
if pd.isnull(token_address):
    continue  # tokens not on the Ethereum chain have no token address to query
URL = f"https://api.ethplorer.io/getTopTokenHolders/{token_address}/"
try:
response = requests.get(url=URL, params=PARAMS)
except Exception as e:
print(e)
continue
if response.status_code == 200:
success_count += 1
token_share = json_normalize(response.json(), "holders")
if not token_share.empty:
df.loc[index, "top_100_wallets_share"] = token_share["share"].sum()
if response.status_code == 429:
print("RATE LIMITED")
break
# if index%10 == 0:
# print(f"completed {index} / {total_rows} | success_count: {success_count}")
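The loop above simply stops on HTTP 429; a gentler option is to back off and retry before giving up. A sketch (`fetch_with_backoff` is a hypothetical helper, not part of the post's original code; the getter is injected so the retry logic can be exercised without network access):

```python
import time

def fetch_with_backoff(get, url, params, max_retries=3, base_delay=1.0):
    """Retry a GET on HTTP 429, doubling the wait between attempts.

    `get` is injected (e.g. requests.get) so the retry logic is
    testable without hitting the real API. Returns the last response,
    or None if every attempt was rate limited."""
    for attempt in range(max_retries):
        response = get(url, params=params)
        if response.status_code != 429:
            return response
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return None
```

In the loop above, the `requests.get` call could then be replaced with `fetch_with_backoff(requests.get, URL, PARAMS)`.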
df["is_ethereum_chain"] = np.where(df["top_100_wallets_share"].isnull(), "NO", "YES")
df["top_100_wallets_share"] = df["top_100_wallets_share"] / 100
df_ethereum = df[df["is_ethereum_chain"] == "YES"]
metric = "top_100_wallets_share"
source = pd.DataFrame(
df_ethereum.groupby("token_cap").agg({metric: "mean"})
).reset_index()
source
bars = (
alt.Chart(
source,
title="Mean share(%) held by top 100 wallets (mean aggregated for tokens in each token cap) ",
)
.mark_bar(cornerRadiusTopLeft=3, cornerRadiusTopRight=3)
.encode(
x=alt.X(
"token_cap:O",
sort=["Mega Cap","Large Cap","Mid Cap","Small Cap","Micro Cap","Nano Cap"],
),
y=metric,
color="token_cap:N",
tooltip=["token_cap", metric],
)
)
text = bars.mark_text(align="left", baseline="middle", dx=-5, dy=-5).encode(
text=alt.Text(metric, format=".2%")
)
(bars + text).properties(height=400, width=850)
Top 100 wallets hold, on average, 80% of a token's supply in the Mid Cap segment, and that figure rises to 93% for Nano Cap tokens
In addition to market cap and volume traded, the concentration of token supply can also help us assess a token's liquidity
chart_list = []
metric = "top_100_wallets_share"
chart = (
alt.Chart(df, title="Top 100 Wallets' share")
.mark_boxplot(ticks=True, size=70)
.encode(
x=alt.X(
"token_cap:O",
title=None,
axis=alt.Axis(labels=True, ticks=False),
scale=alt.Scale(padding=1),
sort=["Mega Cap","Large Cap","Mid Cap","Small Cap","Micro Cap","Nano Cap"],
),
y=alt.Y(metric),
tooltip=["token_cap", metric],
color="token_cap",
)
.properties(width=800)
)
chart_list.append(chart)
alt.vconcat(*chart_list).configure_facet(spacing=0).configure_view(stroke=None)
Similar to what we observed for the VolTraded/MarketCap ratio, each Cap Category has a number of tokens whose supply is less concentrated among the top 100 wallets than that of their peers
chart_list = []
for metric in ["24hr_vol_traded_by_market_cap","7day_vol_traded_by_market_cap","30day_vol_traded_by_market_cap"]:
for token_cap in ["Mid Cap"]:
scatter = (
alt.Chart(
df_ethereum[df_ethereum["token_cap"] == token_cap],
title=f"{token_cap} - {metric} vs top 100 Wallets Share in the token",
)
.mark_point()
.encode(
x="top_100_wallets_share",
y=metric,
size="market_cap_BB",
tooltip=["symbol", metric, "top_100_wallets_share", "token_cap"],
)
.properties(width=800)
)
hline = (
alt.Chart(df_ethereum[df_ethereum["token_cap"] == token_cap])
.mark_rule(color="blue")
.encode(y=f"median({metric}):Q")
)
vline = (
alt.Chart(df_ethereum[df_ethereum["token_cap"] == token_cap])
.mark_rule(color="blue")
.encode(
x="mean(top_100_wallets_share):Q",
)
)
chart_list.append(scatter + vline + hline)
alt.vconcat(*chart_list).resolve_scale(size="independent")
In this chart, we filter to the Mid Cap segment only
Token share concentration (among the top 100 wallets) is on the X axis and the VolTraded/MarketCap ratio on the Y axis
Market cap is represented by the size of the bubble
The vertical line represents the mean concentration of token share for the Mid Cap segment
The horizontal line represents the median VolTraded/MarketCap ratio for the Mid Cap segment
The top left quadrant represents tokens that are likely held by a larger number of wallets and also have high trading activity
The top right quadrant represents tokens that are mostly held by a select few wallets but have high trading activity
The bottom left quadrant represents tokens that are likely held by a larger number of wallets but have low trading activity
The bottom right quadrant represents tokens that are mostly held by a select few wallets and also have very low trading activity
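The four quadrants can also be assigned programmatically, so each token gets a tag for further filtering. A sketch on toy data (`assign_quadrant` is a hypothetical helper; the thresholds follow the chart's reference lines, mean on x and median on y):

```python
import pandas as pd

def assign_quadrant(tokens, x_col, y_col):
    """Tag each token by its quadrant relative to the two reference
    lines: mean concentration on x, median vol/market-cap ratio on y."""
    x_thresh = tokens[x_col].mean()
    y_thresh = tokens[y_col].median()
    concentrated = tokens[x_col] > x_thresh  # right half of the chart
    active = tokens[y_col] > y_thresh        # top half of the chart
    labels = pd.Series("bottom-left", index=tokens.index)
    labels[concentrated & active] = "top-right"
    labels[concentrated & ~active] = "bottom-right"
    labels[~concentrated & active] = "top-left"
    return labels

# Toy example with four tokens, one per quadrant
toy = pd.DataFrame({
    "top_100_wallets_share": [0.2, 0.9, 0.1, 0.95],
    "24hr_vol_traded_by_market_cap": [0.5, 0.6, 0.01, 0.02],
})
print(list(assign_quadrant(toy, "top_100_wallets_share", "24hr_vol_traded_by_market_cap")))
# ['top-left', 'top-right', 'bottom-left', 'bottom-right']
```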
With this, we come to the end of this blog post analyzing the top 1000 tokens. I hope this analysis helps you understand tokens better and serves as a starting point for your own analysis.